Current leading mispronunciation detection and diagnosis (MDD) systems achieve promising performance via end-to-end phoneme recognition. One challenge of such end-to-end solutions is the scarcity of human-annotated phonemes on natural L2 speech. In this work, we leverage unlabeled L2 speech via a pseudo-labeling (PL) procedure and extend the fine-tuning approach based on pre-trained self-supervised learning (SSL) models. Specifically, we use wav2vec 2.0 as our SSL model and fine-tune it using the original labeled L2 speech samples plus the created pseudo-labeled L2 speech samples. Our pseudo-labels are dynamic and are generated by an ensemble of online models, which ensures that our model is robust to the noise in the pseudo-labels. We show that fine-tuning with pseudo-labels achieves a 5.35% phoneme error rate reduction and a 2.48% MDD F1 score improvement over a baseline trained on labeled samples only. The proposed PL method is also shown to outperform conventional offline PL methods. Compared with state-of-the-art MDD systems, our MDD solution produces more accurate and consistent phonetic error diagnoses. In addition, we conduct an open test on a separate UTD-4Accents dataset, where our system's recognition outputs show close correlation with human perception in terms of both accent and intelligibility.
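As a rough illustration of the fine-tuning-with-pseudo-labels recipe described above, a minimal sketch of one pseudo-labeling round with Hugging Face transformers follows. This is not the authors' code: the checkpoint name, greedy CTC decoding, 16 kHz sampling rate, and the omission of the paper's ensemble/online-update details are all assumptions.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Illustrative checkpoint; the paper's model is fine-tuned for phoneme recognition on L2 speech.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

def pseudo_label(unlabeled_audio: torch.Tensor) -> list[str]:
    """Greedy CTC decode of the current model's predictions, used as pseudo-labels."""
    inputs = processor(unlabeled_audio.numpy(), sampling_rate=16_000,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    return processor.batch_decode(torch.argmax(logits, dim=-1))

# Fine-tuning then mixes the labeled L2 samples with these pseudo-labeled samples,
# regenerating the pseudo-labels as the (ensembled) online model improves.
```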
We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by $\tilde{O}\left( \frac{k n_x}{HN_{\mathrm{shared}}} + \frac{k n_u}{N_{\mathrm{target}}}\right)$, where $n_x > k$ is the state dimension, $n_u$ is the input dimension, $N_{\mathrm{shared}}$ denotes the total amount of data collected for each policy during representation learning, and $N_{\mathrm{target}}$ is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation.
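A purely illustrative reading of this bound (the numbers are assumptions, not from the paper): with $k = 5$, $n_x = 20$, $n_u = 2$, $H = 10$ source policies, $N_{\mathrm{shared}} = 1000$, and $N_{\mathrm{target}} = 100$, the two terms evaluate to
$$\frac{k n_x}{H N_{\mathrm{shared}}} + \frac{k n_u}{N_{\mathrm{target}}} = \frac{100}{10{,}000} + \frac{10}{100} = 0.01 + 0.1,$$
so once the pooled source data is plentiful, the target-phase cost dominates and scales with $k n_u$ rather than the full $n_x n_u$ parameter count of an unstructured linear policy.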
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
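To make the "extends PyTorch" point concrete, here is a minimal sketch (not from the paper) of how MONAI's PyTorch-native pieces compose: dictionary-based transforms, a purpose-built network, and a medical-imaging loss. The hyperparameters and data keys are illustrative assumptions.

```python
import torch
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd
from monai.networks.nets import UNet
from monai.losses import DiceLoss

preprocess = Compose([
    LoadImaged(keys=["image", "label"]),           # read medical image files into arrays + metadata
    EnsureChannelFirstd(keys=["image", "label"]),  # enforce channel-first layout expected by PyTorch
    ScaleIntensityd(keys=["image"]),               # intensity normalization for the image only
])

model = UNet(                                      # a standard MONAI segmentation backbone
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Training then proceeds with an ordinary PyTorch loop over a MONAI Dataset/DataLoader.
```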
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably typically involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Open procedures represent the dominant form of surgery worldwide. Artificial intelligence (AI) has the potential to optimize surgical practice and improve patient outcomes, but efforts have largely focused on minimally invasive techniques. Our work overcomes existing data limitations for training AI models by curating, from YouTube, the largest dataset of open surgical videos to date: 1,997 videos of 23 surgical procedures uploaded from 50 countries. Using this dataset, we developed a multi-task AI model capable of real-time understanding of surgical behaviors, hands, and tools, the building blocks of procedural flow and surgeon skill. We show that our model generalizes across diverse surgical types and environments. Illustrating this generalizability, we directly applied our YouTube-trained model to analyze open surgeries prospectively collected at an academic medical center and identified kinematic descriptors of surgical skill related to manual efficiency. Our annotated videos of open surgery (AVOS) dataset and trained model will be made available for further development of surgical AI.
The Kepler and TESS missions have generated over 100,000 potential transit signals that must be processed in order to create catalogs of planet candidates. Over the last few years, there has been growing interest in using machine learning to analyze these data in search of new exoplanets. Unlike existing machine learning works, ExoMiner, the deep learning classifier proposed in this work, mimics how domain experts examine diagnostic tests to vet transit signals. ExoMiner is a highly accurate, explainable, and robust classifier that 1) allows us to validate 301 new exoplanets from the MAST Kepler archive and 2) is general enough to be applied to missions such as the ongoing TESS mission. We perform an extensive experimental study to verify that ExoMiner is more reliable and accurate than existing transit signal classifiers in terms of different classification and ranking metrics. For example, for a fixed precision value of 99%, ExoMiner retrieves 93.6% of all exoplanets in the test set (i.e., recall = 0.936), while this rate is 76.3% for the best existing classifier. Furthermore, the modular design of ExoMiner favors its explainability. We introduce a simple explainability framework that provides experts with feedback on why ExoMiner classifies a transit signal into a specific class label (e.g., planet candidate or not planet candidate).
The idea of using deep autoencoders to encode seismic waveform features and then use them in different seismological applications is appealing. In this paper, we design tests to evaluate this idea of using autoencoders as feature extractors for different seismological applications, such as event discrimination (i.e., earthquake vs. noise waveforms, earthquake vs. explosion waveforms) and phase picking. These tests involve training an autoencoder, either undercomplete or overcomplete, on a large amount of seismic waveforms, and then using the trained encoder as a feature extractor with subsequent application layers (either fully connected layers, or convolutional layers plus fully connected layers) to make the decision. By comparing the performance of these newly designed models against baseline models trained from scratch, we conclude that the autoencoder feature extractor approach can perform well only under certain conditions, such as when the target problem requires features similar to those encoded by the autoencoder, when there is a relatively small amount of training data, and when certain model structures and training strategies are used. The model structure that works best across all these tests is an overcomplete autoencoder with convolutional layers plus fully connected layers for the estimation.
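A minimal sketch of the autoencoder-as-feature-extractor idea described above (an assumption for illustration, not the paper's exact architecture): pretrain a convolutional autoencoder on waveforms with a reconstruction loss, then reuse the frozen encoder under a small application head. Waveform length and layer sizes are made up.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(               # (batch, 1, 6000) -> (batch, 32, 1500)
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(               # mirror the encoder back to the waveform
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: train the autoencoder on a large set of waveforms with a reconstruction loss.
ae = ConvAutoencoder()
recon_loss = nn.MSELoss()

# Stage 2: freeze the encoder and attach an application head,
# e.g. for earthquake-vs-noise discrimination.
for p in ae.encoder.parameters():
    p.requires_grad = False

classifier = nn.Sequential(
    ae.encoder,
    nn.Flatten(),
    nn.Linear(32 * 1500, 64), nn.ReLU(),   # 6000 samples / 4 after two stride-2 convs
    nn.Linear(64, 2),
)
```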
Treatment decisions for brain metastatic disease rely on knowledge of the primary organ site, which is currently determined with biopsy and histology. Here we develop a novel deep learning approach for accurate non-invasive digital histology from whole-brain MRI data. Our IRB-approved single-site retrospective study comprised patients (n = 1,399) referred for MRI treatment planning and gamma knife radiosurgery over a 19-year period. Contrast-enhanced T1-weighted and T2-weighted fluid-attenuated inversion recovery brain MRI exams (n = 1,582) were preprocessed and input to the proposed deep learning workflow for tumor segmentation, modality transfer, and classification of the primary site into one of five classes (lung, breast, melanoma, renal, and other). Ten-fold cross-validation yielded an overall AUC of 0.947 (95% CI: 0.938, 0.955), a lung class AUC of 0.899 (95% CI: 0.884, 0.915), a breast class AUC of 0.990 (95% CI: 0.983, 0.997), a melanoma class AUC of 0.882 (95% CI: 0.858, 0.906), a renal class AUC of 0.870 (95% CI: 0.823, 0.918), and an "other" class AUC of 0.885 (95% CI: 0.843, 0.949). These data establish that whole-brain imaging features are discriminative enough to allow accurate diagnosis of the primary organ site of malignancy. Our end-to-end deep radiomics approach holds great promise for classifying metastatic tumor types from whole-brain MRI images. Further refinement could provide an invaluable clinical tool to expedite primary cancer site identification for precision treatment and improved outcomes.
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if: (1) it violates correct specifications or (2) it maintains the error behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated but instead only relies on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
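A hedged sketch of the two-stage decision just described, as one reading of the abstract rather than the authors' implementation: invariants are modeled as plain sets of strings, and the invariant-inference step and syntax-based classifier are passed in as inputs, since their concrete forms are assumptions here.

```python
from typing import Callable, Set

def is_overfitting(
    correct_invariants: Set[str],    # likely invariants encoding correct specifications
    error_invariants: Set[str],      # likely invariants encoding the original error behavior
    patched_invariants: Set[str],    # likely invariants inferred on the patched program
    syntax_classifier: Callable[[], bool],  # fallback model trained on labeled patches
) -> bool:
    # (1) the patch breaks some correct-specification invariant, or
    # (2) it still satisfies an invariant characterizing the original error behavior
    violates_correct_spec = not correct_invariants <= patched_invariants
    maintains_error_behavior = bool(error_invariants & patched_invariants)
    if violates_correct_spec or maintains_error_behavior:
        return True                  # semantic reasoning flags the patch as overfitting
    return syntax_classifier()       # otherwise fall back to syntax-based assessment
```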
When robots learn reward functions using high capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task "features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, and thus their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
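A minimal sketch of learning a feature representation from similarity queries with a triplet-style contrastive loss, in the spirit of the approach above but as an assumption rather than the paper's code: behaviors the user judges similar are pulled together in feature space and dissimilar ones pushed apart. The encoder, trajectory shapes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    def __init__(self, state_dim: int, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, feature_dim),
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, horizon, state_dim) -> mean-pool per-state features over the horizon
        return self.net(traj).mean(dim=1)

encoder = FeatureEncoder(state_dim=10, feature_dim=4)
loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One batch of queries: (anchor, behavior the user judged similar, behavior judged different).
anchor = torch.randn(32, 50, 10)
similar = torch.randn(32, 50, 10)
different = torch.randn(32, 50, 10)

loss = loss_fn(encoder(anchor), encoder(similar), encoder(different))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```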